Efficient estimation of neural weights by polynomial approximation
Author
Abstract
It has been known for some years that the uniform-density problem for feedforward neural networks has a positive answer: any real-valued, continuous function on a compact subset of R^d can be uniformly approximated by a sigmoidal neural network with one hidden layer. We design here algorithms for efficient uniform approximation by a certain class of neural networks with one hidden layer which we call nearly exponential. This class contains, e.g., all networks with the activation functions 1/(1 + e^{-t}), tanh(t), or e^t ∧ 1 (i.e., min(e^t, 1)) in their hidden layers. The algorithms flow from a theorem stating that such networks attain the order of approximation O(N^{-1/d}), d being the dimension and N the number of hidden neurons. This theorem, in turn, is a consequence of a close relationship between neural networks of nearly exponential type and multivariate algebraic and exponential polynomials. The algorithms need neither a starting point nor learning parameters; they do not get stuck in local minima, and the gain in execution time relative to the backpropagation algorithm is enormous. The size of the hidden layer can be bounded analytically as a function of the precision required.
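The stated link between nearly exponential activations and exponential polynomials can be checked numerically. Below is a minimal sketch (assuming numpy; the constants c and eps are illustrative choices, not taken from the paper): a shifted, rescaled logistic sigmoid approximates e^t on a compact interval, and a two-neuron network built from such units approximates the monomial x through the difference quotient (e^{eps·x} − 1)/eps.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# A "nearly exponential" activation behaves like e^t in one tail:
# sigmoid(t + c) ~ e^(t + c) as t + c -> -infinity, hence
# exp(-c) * sigmoid(t + c) -> e^t uniformly on compact sets as c -> -inf.
c = -20.0  # illustrative shift
t = np.linspace(-1.0, 1.0, 201)
approx = np.exp(-c) * sigmoid(t + c)
print("max |exp(-c)*sigmoid(t+c) - e^t| on [-1,1]:",
      np.max(np.abs(approx - np.exp(t))))

# The monomial x as a difference quotient of exponentials:
# x = lim_{eps -> 0} (e^(eps*x) - 1) / eps, each exponential realized
# by a single sigmoidal hidden neuron as above.
eps = 1e-3  # illustrative step
x = np.linspace(-1.0, 1.0, 201)
e_exp = np.exp(-c) * sigmoid(eps * x + c)   # one neuron, ~ e^(eps*x)
e_one = np.exp(-c) * sigmoid(0.0 * x + c)   # one neuron, ~ e^0 = 1
net = (e_exp - e_one) / eps                 # two-neuron network, ~ x
print("max |net(x) - x| on [-1,1]:", np.max(np.abs(net - x)))
```

Higher-degree monomials can presumably be produced in the same spirit from higher-order difference quotients of s ↦ e^{s(w·x)}, which is one way to see the bridge from multivariate algebraic polynomials to one-hidden-layer networks that the theorem exploits.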
Similar resources
On Robust Concepts and Small Neural Nets
The universal approximation theorem for neural networks says that any reasonable function is well-approximated by a two-layer neural network with sigmoid gates, but it does not provide good bounds on the number of hidden-layer nodes or the weights. However, robust concepts often have small neural networks in practice. We show an efficient analog of the universal approximation theorem on the bool...
Almost Linear VC Dimension Bounds for Piecewise Polynomial Networks
We compute upper and lower bounds on the VC dimension and pseudo-dimension of feedforward neural networks composed of piecewise polynomial activation functions. We show that if the number of layers is fixed, then the VC dimension and pseudo-dimension grow as W log W, where W is the number of parameters in the network. This result is in contrast to the case where the number of layers is unbo...
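As a toy illustration of the quantity in that bound, the sketch below (hypothetical architecture, standard library only) counts the parameters W of a fully connected feedforward network and evaluates the W log W growth rate; the bound itself is asymptotic and holds up to constants for fixed depth.

```python
import math

def num_params(layer_sizes):
    # Weights plus biases of a fully connected feedforward network.
    return sum((n_in + 1) * n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

W = num_params([10, 32, 32, 1])  # hypothetical fixed-depth architecture
print(W, W * math.log(W))        # parameter count and the W log W rate
```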
Regression modeling in back-propagation and projection pursuit learning
We study and compare two types of connectionist learning methods for model-free regression problems: 1) backpropagation learning (BPL), and 2) projection pursuit learning (PPL), which emerged in recent years in the statistical estimation literature. Both the BPL and the PPL are based on projections of the data in directions determined from interconnection weights. However, unlike the use of fi...
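To make the projection idea concrete, here is a crude single-ridge sketch of projection-pursuit-style fitting (my illustration, not the paper's algorithm; assumes numpy): it searches random unit directions a, fits a univariate polynomial ridge function to the projected data a·x, and keeps the direction with the smallest residual.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_one_ridge(X, y, n_dirs=200, deg=5):
    """One projection pursuit step (sketch): try random unit directions a,
    fit a polynomial ridge function g to (a.x, y), keep the best one."""
    best = None
    for _ in range(n_dirs):
        a = rng.normal(size=X.shape[1])
        a /= np.linalg.norm(a)
        t = X @ a                          # project the data onto a
        coef = np.polyfit(t, y, deg)       # univariate ridge function g
        r = y - np.polyval(coef, t)
        sse = r @ r
        if best is None or sse < best[0]:
            best = (sse, a, coef)
    return best[1], best[2]

# Toy target with a single ridge structure: y = sin(2 * a_true . x).
X = rng.uniform(-1, 1, size=(500, 2))
a_true = np.array([1.0, 1.0]) / np.sqrt(2)
y = np.sin(2 * (X @ a_true))
a, coef = fit_one_ridge(X, y)
print("recovered direction:", a)
print("residual:", np.sum((y - np.polyval(coef, X @ a)) ** 2))
```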
STRUCTURAL DAMAGE DETECTION BY MODEL UPDATING METHOD BASED ON CASCADE FEED-FORWARD NEURAL NETWORK AS AN EFFICIENT APPROXIMATION MECHANISM
Vibration-based techniques of structural damage detection using the model updating method are computationally expensive for large-scale structures. In this study, after precisely locating the eventual damage of a structure using the modal strain energy based index (MSEBI), to efficiently reduce the computational cost of model updating during the optimization process of damage severity detection, the M...
Efficient agnostic learning of neural networks with bounded fan-in
We show that the class of two-layer neural networks with bounded fan-in is efficiently learnable in a realistic extension of the Probably Approximately Correct (PAC) learning model. In this model, a joint probability distribution is assumed to exist on the observations, and the learner is required to approximate the neural network which minimizes the expected quadratic error. As special cases, th...
Journal: IEEE Trans. Information Theory
Volume: 45, Issue: -
Pages: -
Year of publication: 1999